ICLR 2020 · The Eighth International Conference on Learning Representations
Sunday, April 26 through Friday, May 1, 2020
Virtual Conference
General Chair
- Alexander Rush, Cornell Tech
Senior Program Chair
- Shakir Mohamed, DeepMind
Program Chairs
- Dawn Song, UC Berkeley
- Kyunghyun Cho, NYU & FAIR
- Martha White, University of Alberta
Area Chairs
Virtual Chairs
- Hendrik Strobelt, MIT-IBM Watson AI Lab, IBM Research
Workshop Chairs
- Gabriel Synnaeve, Facebook AI Research
- Asja Fischer, Ruhr University Bochum
Diversity+Inclusion Chairs
- Animashree Anandkumar, Caltech / NVIDIA
- Kevin Swersky, Google AI
Logistics Chairs
- Timnit Gebru, Google Brain
- Esube Bekele, In-Q-Tel
Socials Chair
- Adam White, DeepMind
Contact
The organizers can be contacted here.
Sponsors
The generous support of our sponsors allowed us to reduce our ticket price by about 50% and to support diversity at the meeting with travel awards. In addition, many accepted papers at the conference were contributed by our sponsors.
About Us
The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning.
ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.
Participants at ICLR span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.
A non-exhaustive list of relevant topics explored at the conference includes:
- unsupervised, semi-supervised, and supervised representation learning
- representation learning for planning and reinforcement learning
- representation learning for computer vision and natural language processing
- metric learning and kernel learning
- sparse coding and dimensionality expansion
- hierarchical models
- optimization for representation learning
- learning representations of outputs or states
- optimal transport
- theoretical issues in deep learning
- societal considerations of representation learning, including fairness, safety, privacy, interpretability, and explainability
- visualization or interpretation of learned representations
- implementation issues, parallelization, software platforms, hardware
- climate and sustainability
- applications in audio, speech, robotics, neuroscience, biology, or any other field